Add MoE config files for Nvidia Pro 6000 Blackwell Workstation Edition #20329

Open · wants to merge 2 commits into main

Conversation

@Chen-zexi commented Jul 1, 2025:


Purpose

Add MoE config files for the NVIDIA RTX PRO 6000 Blackwell Workstation Edition.
Generated using:
```bash
python benchmarks/kernels/benchmark_moe.py --model Qwen/Qwen3-30B-A3B --tune
```
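
For reference, the tuner emits one JSON per (E, N) shape, and vLLM keys the lookup on the device name. A minimal sketch of the naming scheme (the exact string comes from torch.cuda.get_device_name(), so the example name is illustrative):

```python
import torch

# The fused-MoE kernel looks up its config by expert count (E), intermediate
# size (N), and the GPU name with spaces replaced by underscores.
device_name = torch.cuda.get_device_name(0).replace(" ", "_")
filename = f"E=128,N=768,device_name={device_name}.json"
# -> e.g. E=128,N=768,device_name=NVIDIA_RTX_PRO_6000_Blackwell_Workstation_Edition.json
```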

Test Plan

No test needed

Test Result

vLLM picks up the config files for the NVIDIA RTX PRO 6000 Blackwell Workstation Edition GPU when loading Qwen3-30B-A3B models.


@gemini-code-assist (bot) left a comment:

Summary of Changes

Hello @Chen-zexi, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a set of new configuration files to support and optimize Mixture-of-Experts (MoE) model execution on the Nvidia RTX Pro 6000 Blackwell Workstation Edition GPU. These additions aim to enhance the performance and compatibility of the system when running large language models that utilize MoE architectures, such as Qwen3-30B-A3B, by providing pre-tuned kernel parameters for various expert and neuron configurations.

Highlights

  • New MoE Configuration Files: I've added several new JSON configuration files specifically for Mixture-of-Experts (MoE) operations on the Nvidia RTX Pro 6000 Blackwell Workstation Edition GPU.
  • Performance Optimization: These configurations contain optimized kernel parameters (e.g., BLOCK_SIZE_M, BLOCK_SIZE_N, num_warps) for various MoE dimensions, generated to improve performance when loading models like Qwen3-30B-A3B.
  • Expanded GPU Support: This PR expands the range of supported hardware by providing pre-tuned MoE configurations for a new GPU, ensuring better compatibility and efficiency.

@gemini-code-assist (bot) left a comment:

Code Review

This pull request adds MoE kernel configurations for the 'NVIDIA RTX PRO 6000 Blackwell Workstation Edition' GPU. The changes consist of four new JSON configuration files, which appear to be auto-generated by the benchmarking script.

My main concern is that the configuration file for E=64,N=768 is identical to the one for E=128,N=768. This is suspicious and might be a copy-paste error, as the optimal kernel parameters usually depend on the number of experts (E). Please verify the correctness of these configurations.

Comment on lines 1 to 146
"num_warps": 4,
"num_stages": 2
},
"32": {
"BLOCK_SIZE_M": 16,
"BLOCK_SIZE_N": 64,
"BLOCK_SIZE_K": 128,
"GROUP_SIZE_M": 1,
"num_warps": 4,
"num_stages": 3
},
"48": {
"BLOCK_SIZE_M": 32,
"BLOCK_SIZE_N": 64,
"BLOCK_SIZE_K": 128,
"GROUP_SIZE_M": 1,
"num_warps": 8,
"num_stages": 2
},
"64": {
"BLOCK_SIZE_M": 16,
"BLOCK_SIZE_N": 64,
"BLOCK_SIZE_K": 128,
"GROUP_SIZE_M": 32,
"num_warps": 4,
"num_stages": 3
},
"96": {
"BLOCK_SIZE_M": 16,
"BLOCK_SIZE_N": 32,
"BLOCK_SIZE_K": 64,
"GROUP_SIZE_M": 64,
"num_warps": 4,
"num_stages": 4
},
"128": {
"BLOCK_SIZE_M": 32,
"BLOCK_SIZE_N": 32,
"BLOCK_SIZE_K": 128,
"GROUP_SIZE_M": 32,
"num_warps": 4,
"num_stages": 3
},
"256": {
"BLOCK_SIZE_M": 32,
"BLOCK_SIZE_N": 32,
"BLOCK_SIZE_K": 64,
"GROUP_SIZE_M": 32,
"num_warps": 4,
"num_stages": 5
},
"512": {
"BLOCK_SIZE_M": 32,
"BLOCK_SIZE_N": 64,
"BLOCK_SIZE_K": 64,
"GROUP_SIZE_M": 16,
"num_warps": 4,
"num_stages": 4
},
"1024": {
"BLOCK_SIZE_M": 64,
"BLOCK_SIZE_N": 64,
"BLOCK_SIZE_K": 64,
"GROUP_SIZE_M": 16,
"num_warps": 8,
"num_stages": 4
},
"1536": {
"BLOCK_SIZE_M": 128,
"BLOCK_SIZE_N": 64,
"BLOCK_SIZE_K": 64,
"GROUP_SIZE_M": 16,
"num_warps": 8,
"num_stages": 4
},
"2048": {
"BLOCK_SIZE_M": 64,
"BLOCK_SIZE_N": 128,
"BLOCK_SIZE_K": 64,
"GROUP_SIZE_M": 1,
"num_warps": 8,
"num_stages": 3
},
"3072": {
"BLOCK_SIZE_M": 256,
"BLOCK_SIZE_N": 128,
"BLOCK_SIZE_K": 64,
"GROUP_SIZE_M": 1,
"num_warps": 8,
"num_stages": 3
},
"4096": {
"BLOCK_SIZE_M": 128,
"BLOCK_SIZE_N": 256,
"BLOCK_SIZE_K": 64,
"GROUP_SIZE_M": 1,
"num_warps": 8,
"num_stages": 3
}
}
```
@gemini-code-assist (bot) commented on the diff (severity: high):

This configuration file appears to be an exact copy of the one for E=128,N=768. It's highly unlikely that the optimal kernel configurations for E=64 and E=128 are identical across all batch sizes, as the number of experts (E) is a significant factor in performance tuning. Please verify if this is intentional or a potential copy-paste error.
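
A quick way to verify whether the two files are semantically identical (a minimal sketch; the paths are illustrative, pointing at the files under vllm/model_executor/layers/fused_moe/configs/):

```python
import json

# Illustrative paths: the two config files flagged above.
with open("E=64,N=768,device_name=NVIDIA_RTX_PRO_6000_Blackwell_Workstation_Edition.json") as f:
    cfg_e64 = json.load(f)
with open("E=128,N=768,device_name=NVIDIA_RTX_PRO_6000_Blackwell_Workstation_Edition.json") as f:
    cfg_e128 = json.load(f)

# True here means every batch-size entry matches, confirming the copy.
print(cfg_e64 == cfg_e128)
```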

@yewentao256 (Contributor) left a comment:

Thanks for the config!

@Chen-zexi (Author) commented:

Hi @yewentao256, thanks for reviewing my PR! As gemini-code-assist pointed out, the configuration file for E=64,N=768 may not be optimal, since it was copied from the one for E=128,N=768. The benchmark script currently does not take expert parallelism into consideration, so I wasn't able to generate a native configuration file for E=64,N=768. Is this indeed the case for the current benchmark script, and should I write a custom script to find the optimal config when loading the model with --enable-expert-parallel?
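
(For context on why an E=64 file is needed at all: under expert parallelism the experts are sharded across ranks, so the kernel config lookup uses the local expert count. A back-of-envelope sketch, assuming even sharding:)

```python
global_num_experts = 128  # Qwen3-30B-A3B's routed expert count
ep_size = 2               # two GPUs with --enable-expert-parallel

# Under expert parallelism each rank holds an even shard of the experts,
# so the fused-MoE kernel (and its config lookup) sees the local count.
local_num_experts = global_num_experts // ep_size
print(local_num_experts)  # 64 -> hence the need for an E=64,N=768 config
```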

@mgoin (Member) commented Jul 1, 2025:

@Chen-zexi I wasn't aware that EP hadn't been added to benchmark_moe.py. Would you be willing to add support for this?

@Chen-zexi (Author) commented:

> @Chen-zexi I wasn't aware that EP hadn't been added to benchmark_moe.py. Would you be willing to add support for this?

Sure, I'd be happy to.

@mgoin (Member) commented Jul 2, 2025:

In the meantime, can you share some e2e benchmarks showing that these configs actually make the deployment faster on your hardware?

@Chen-zexi (Author) commented:

Hi @mgoin, I am getting the following results running the command below:

```bash
python3 benchmarks/benchmark_throughput.py \
    --model Qwen/Qwen3-30B-A3B \
    --tensor-parallel-size 2 \
    --dataset-name sonnet \
    --dataset-path benchmarks/sonnet.txt \
    --max-num-batched-tokens 4096 \
    --num-prompts 100
```

Without tuned config:

```
INFO 07-02 11:37:18 [metrics.py:417] Avg prompt throughput: 10147.5 tokens/s, Avg generation throughput: 1061.8 tokens/s, Running: 100 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 5.0%, CPU KV cache usage: 0.0%.
INFO 07-02 11:37:22 [multiproc_worker_utils.py:260] Worker exiting
Throughput: 11.16 requests/s, 7729.43 total tokens/s, 1674.10 output tokens/s
Total num prompt tokens:  54256
Total num output tokens:  15000
```

With tuned config:

```
INFO 07-02 11:43:38 [metrics.py:417] Avg prompt throughput: 10822.0 tokens/s, Avg generation throughput: 1371.7 tokens/s, Running: 100 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 5.1%, CPU KV cache usage: 0.0%.
INFO 07-02 11:43:42 [multiproc_worker_utils.py:260] Worker exiting
Throughput: 11.13 requests/s, 7708.03 total tokens/s, 1669.46 output tokens/s
Total num prompt tokens:  54256
Total num output tokens:  15000
```

The per-step generation throughput reported by the runtime metrics is higher with the tuned config, but the overall end-to-end throughput is nearly identical.

It might be worth mentioning that I am seeing lower throughput when EP is enabled:

```
INFO 07-02 11:49:38 [metrics.py:417] Avg prompt throughput: 10800.5 tokens/s, Avg generation throughput: 970.8 tokens/s, Running: 100 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 4.9%, CPU KV cache usage: 0.0%.
INFO 07-02 11:49:42 [multiproc_worker_utils.py:260] Worker exiting
Throughput: 10.55 requests/s, 7303.80 total tokens/s, 1581.91 output tokens/s
Total num prompt tokens:  54256
Total num output tokens:  15000
```

Do these numbers look right to you?

@Chen-zexi (Author) commented:

Hi @mgoin, I made PR #20501 adding EP support to benchmark_moe.py.
Below are some results I got:

| Metric | No EP, tuned config | No EP, default config | EP, tuned config | EP, default config |
|---|---|---|---|---|
| Successful requests | 1,000 | 1,000 | 1,000 | 1,000 |
| Benchmark duration (s) | 18.07 | 18.55 | 18.44 | 19.05 |
| Total input tokens | 542,989 | 542,989 | 542,989 | 542,989 |
| Total generated tokens | 150,000 | 150,000 | 150,000 | 150,000 |
| Request throughput (req/s) | 55.33 | 53.90 | 54.22 | 52.51 |
| Output token throughput (tok/s) | 8,299.70 | 8,084.41 | 8,133.71 | 7,875.79 |
| Total token throughput (tok/s) | 38,344.00 | 37,349.37 | 37,577.17 | 36,385.59 |
| Mean TTFT (ms) | 1,370.36 | 1,382.63 | 1,487.07 | 1,556.65 |
| Median TTFT (ms) | 1,429.38 | 1,409.14 | 1,494.59 | 1,635.32 |
| P99 TTFT (ms) | 1,744.33 | 1,907.57 | 1,987.27 | 2,031.82 |
| Mean TPOT (ms) | 110.47 | 113.66 | 112.12 | 115.84 |
| Median TPOT (ms) | 110.53 | 114.06 | 112.54 | 115.91 |
| P99 TPOT (ms) | 111.93 | 115.06 | 113.27 | 118.33 |
| Mean ITL (ms) | 110.47 | 113.66 | 112.12 | 115.84 |
| Median ITL (ms) | 108.05 | 111.86 | 109.89 | 115.16 |
| P99 ITL (ms) | 288.67 | 315.99 | 342.36 | 199.69 |

Benchmark results generated by:

```bash
python3 benchmarks/benchmark_serving.py --model Qwen/Qwen3-30B-A3B \
    --dataset-name sonnet \
    --dataset-path benchmarks/sonnet.txt \
    --num-prompts 1000 \
    --save-result
```

@Chen-zexi requested a review from yewentao256 on July 6, 2025.
@yewentao256 (Contributor) left a comment:

@Chen-zexi Thanks for the work! It is a little bit strange; I'd have expected that with EP tuned, the throughput would be better. Could you figure out why this happens? I recommend testing with more GPUs (4 or 8) or with different architectures.

@Chen-zexi (Author) commented:

Hi @yewentao256, thank you for reviewing my PR. I tested with two RTX PRO 6000 Blackwell Workstation GPUs on a consumer-grade motherboard, which only supports PCIe 5.0 x8 per GPU in a dual-GPU setup. As a result, the bottleneck is most likely the limited bandwidth for GPU-to-GPU communication, which is especially the case for EP. I expect performance would improve significantly with GPUs that support NVLink, but unfortunately I do not have access to additional GPUs to test this further.
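
For a rough sense of the ceiling involved (a back-of-envelope sketch, assuming standard PCIe 5.0 signaling):

```python
# Back-of-envelope PCIe 5.0 bandwidth, assuming standard signaling:
lane_GBps = 32 * (128 / 130) / 8   # 32 GT/s per lane, 128b/130b encoding -> ~3.9 GB/s
x8_GBps = 8 * lane_GBps            # ~31.5 GB/s per direction for all GPU-to-GPU traffic
print(f"~{x8_GBps:.1f} GB/s per direction on a PCIe 5.0 x8 link")
```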

The configuration files I uploaded should be fine for now, as this particular GPU does not support NVLink anyway.
